Among existing methods, LiDAR odometry shows superior performance, but visual odometry is still widely used for its price advantage. Conventionally, the task of visual odometry mainly relies on consecutive images as input. However, it is very complicated for the odometry network to learn the epipolar geometry information provided by the images. In this paper, the concept of pseudo-LiDAR is introduced into odometry to solve this problem. The pseudo-LiDAR point cloud back-projects the depth map generated from images into a 3D point cloud, which changes the way images are represented. Compared with stereo images, the pseudo-LiDAR point cloud generated by a stereo matching network provides explicit 3D coordinates. Since the 6 Degrees of Freedom (DoF) pose transformation occurs in 3D space, the 3D structure information provided by the pseudo-LiDAR point cloud is more direct than that of images. Compared with sparse LiDAR, pseudo-LiDAR yields a denser point cloud. To make full use of the rich point cloud information provided by pseudo-LiDAR, a projection-aware dense odometry pipeline is adopted. Most previous LiDAR-based algorithms sample 8192 points from the point cloud as input to the odometry network; the projection-aware dense odometry pipeline instead takes all pseudo-LiDAR points generated from the image, except erroneous points, as network input. While fully exploiting the 3D geometric information in images, the semantic information in images is also used in the odometry task, so 2D-3D fusion is achieved in an image-only based odometry. Experiments on the KITTI dataset demonstrate the effectiveness of our method. To the best of our knowledge, this is the first visual odometry method using pseudo-LiDAR.
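Below is a minimal sketch of the core back-projection step, assuming a standard pinhole camera model; the KITTI-like intrinsics, the maximum-depth cutoff used to drop error points, and the function name are illustrative assumptions rather than the authors' released code.

import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy, max_depth=80.0):
    # Back-project a depth map of shape (H, W) into an (N, 3) pseudo-LiDAR
    # point cloud, dropping invalid or implausibly deep points, which mirrors
    # the removal of error points before odometry.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grid
    valid = (depth > 0) & (depth < max_depth)       # filter error points
    z = depth[valid]
    x = (u[valid] - cx) * z / fx                    # pinhole back-projection
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Toy usage with a constant depth map and hypothetical KITTI-like intrinsics.
depth = np.full((375, 1242), 10.0)
cloud = depth_to_pseudo_lidar(depth, fx=721.5, fy=721.5, cx=609.6, cy=172.9)
print(cloud.shape)  # (465750, 3), i.e. one 3D point per valid pixel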
Open World Object Detection (OWOD) is a challenging computer vision problem that requires detecting unknown objects and gradually learning the identified unknown classes. However, it cannot distinguish unknown instances as multiple unknown classes. In this work, we propose a novel OWOD problem called Unknown-Classified Open World Object Detection (UC-OWOD). UC-OWOD aims to detect unknown instances and classify them into different unknown classes. Moreover, we formulate the problem and devise a two-stage object detector to solve UC-OWOD. First, an unknown label-aware proposal module and an unknown-discriminative classification head are used to detect known and unknown objects. Then, similarity-based unknown classification and unknown clustering refinement modules are constructed to distinguish multiple unknown classes. In addition, two novel evaluation protocols are designed to evaluate unknown-class detection. Extensive experiments and visualizations demonstrate the effectiveness of the proposed method. Code is available at https://github.com/JohnWuzh/UC-OWOD.
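The similarity-based unknown classification and clustering refinement modules are learned inside the detector; as a loose, hypothetical stand-in for that step, the sketch below groups unknown-proposal embeddings into multiple pseudo unknown classes with plain k-means.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical (N, D) embeddings of proposals flagged as unknown by the
# unknown-discriminative classification head; synthetic blobs here.
rng = np.random.default_rng(0)
unknown_feats = np.concatenate([
    rng.normal(loc=0.0, scale=0.5, size=(40, 128)),  # pretend unknown class A
    rng.normal(loc=3.0, scale=0.5, size=(40, 128)),  # pretend unknown class B
])

# Split unknown instances into multiple unknown classes by feature similarity;
# the paper uses learned similarity plus clustering refinement, so k-means is
# only an illustration of the idea.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(unknown_feats)
print(np.bincount(labels))  # instances assigned to each discovered class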
Generative models have been widely applied to solve extractive tasks, where parts of the input are extracted to form the desired output, and have achieved significant success. For example, in extractive question answering (QA), generative models have constantly yielded state-of-the-art results. In this work, we identify the issue of tokenization inconsistency that is commonly neglected in training these models. This issue damages the extractive nature of these tasks when the input and output are tokenized inconsistently by the tokenizer, and thus leads to performance drops as well as hallucination. We propose a simple yet effective fix to this issue and conduct a case study on extractive QA. We show that, with consistent tokenization, the model performs better in both in-domain and out-of-domain datasets, with a notable average of +1.7 F2 gain when a BART model is trained on SQuAD and evaluated on 8 QA datasets. Further, the model converges faster and becomes less likely to generate out-of-context answers. With these findings, we would like to call for more attention to how tokenization should be done when solving extractive tasks, and we recommend applying consistent tokenization during training.
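The inconsistency is easy to reproduce with any BPE-style tokenizer; the sketch below, assuming a HuggingFace BART tokenizer, shows the mismatch and the consistent fix of reusing the token IDs of the answer span as it appears inside the tokenized context.

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/bart-base")

context = "The capital of France is Paris."
answer = "Paris"

# Inconsistent: tokenizing the answer on its own drops the leading space it
# carries inside the context, so the target IDs differ from the IDs the
# encoder actually saw for the same span.
standalone_ids = tok.encode(answer, add_special_tokens=False)
in_context_ids = tok.encode(" " + answer, add_special_tokens=False)
print(standalone_ids, in_context_ids)  # typically different for BPE tokenizers

# Consistent fix: locate the answer span in the tokenized context and reuse
# those exact IDs as the generation target.
ctx_ids = tok.encode(context, add_special_tokens=False)
target_ids = next(
    ctx_ids[i:i + len(in_context_ids)]
    for i in range(len(ctx_ids))
    if ctx_ids[i:i + len(in_context_ids)] == in_context_ids
)
print(target_ids == in_context_ids)  # True: labels now match the input tokens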
There has been great progress in unifying various table-to-text tasks using a single encoder-decoder model trained via multi-task learning (Xie et al., 2022). However, existing methods typically encode task information with a simple dataset name as a prefix to the encoder. This not only limits the effectiveness of multi-task learning, but also hinders the model's ability to generalize to new domains or tasks that were not seen during training, which is crucial for real-world applications. In this paper, we propose compositional task configurations, a set of prompts prepended to the encoder to improve cross-task generalization of unified models. We design the task configurations to explicitly specify the task type, as well as its input and output types. We show that this not only allows the model to better learn shared knowledge across different tasks at training, but also allows us to control the model by composing new configurations that apply novel input-output combinations in a zero-shot manner. We demonstrate via experiments over ten table-to-text tasks that our method outperforms the UnifiedSKG baseline by noticeable margins in both in-domain and zero-shot settings, with average improvements of +0.5 and +12.6 from using a T5-large backbone, respectively.
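As an illustration of how such configurations compose, the sketch below uses a hypothetical prompt template; the exact configuration format is the paper's design detail, so the field names and separators here are assumptions.

def build_config(task_type: str, input_type: str, output_type: str, source: str) -> str:
    # Prepend explicit task, input, and output type declarations to the encoder input.
    config = f"task: {task_type} ; input: {input_type} ; output: {output_type}"
    return f"{config} || {source}"

# A configuration seen during training: table question answering.
print(build_config("qa", "table", "answer", "[TAB] city | population ... Q: largest city?"))

# Zero-shot composition: recombine known pieces into an input-output pairing
# the model never saw as a whole, e.g. free-text generation from a table.
print(build_config("summarization", "table", "text", "[TAB] team | wins | losses ..."))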
Spurious correlations in training data often lead to robustness issues since models learn to use them as shortcuts. For example, when predicting whether an object is a cow, a model might learn to rely on its green background, so it would do poorly on a cow on a sandy background. A standard benchmark for methods that mitigate this problem is Waterbirds. The best method (Group Distributionally Robust Optimization - GroupDRO) currently achieves 89\% worst-group accuracy, while standard training from scratch on raw images only gets 72\%. GroupDRO requires training a model in an end-to-end manner with subgroup labels. In this paper, we show that we can achieve up to 90\% accuracy without using any subgroup information in the training set by simply using embeddings from a large pre-trained vision model as a feature extractor and training a linear classifier on top of them. With experiments on a wide range of pre-trained models and pre-training datasets, we show that the capacity of the pre-trained model and the size of the pre-training dataset matter. Our experiments reveal that high-capacity vision transformers perform better than high-capacity convolutional neural networks, and that larger pre-training datasets lead to better worst-group accuracy on the spurious correlation dataset.
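The recipe itself is a plain linear probe; a minimal sketch follows, with random arrays standing in for embeddings extracted from a frozen pre-trained backbone, since the exact model and data pipeline are the paper's experimental choices.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-ins for embeddings from a frozen pre-trained vision model (e.g. the
# pooled output of a ViT run with gradients disabled).
train_emb, train_y = rng.normal(size=(500, 768)), rng.integers(0, 2, 500)
test_emb, test_y = rng.normal(size=(100, 768)), rng.integers(0, 2, 100)

# Train only the linear classifier; no subgroup labels are used.
probe = LogisticRegression(max_iter=1000).fit(train_emb, train_y)
print("accuracy:", probe.score(test_emb, test_y))
# Worst-group accuracy would be computed at evaluation time only, by scoring
# each (label, background) subgroup separately and taking the minimum.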
Machine learning models have been found to learn shortcuts -- unintended decision rules that are unable to generalize -- undermining models' reliability. Previous works address this problem under the tenuous assumption that only a single shortcut exists in the training data. Real-world images are rife with multiple visual cues from background to texture. Key to advancing the reliability of vision systems is understanding whether existing methods can overcome multiple shortcuts or struggle in a Whac-A-Mole game, i.e., where mitigating one shortcut amplifies reliance on others. To address this shortcoming, we propose two benchmarks: 1) UrbanCars, a dataset with precisely controlled spurious cues, and 2) ImageNet-W, an evaluation set based on ImageNet for watermark, a shortcut we discovered affects nearly every modern vision model. Along with texture and background, ImageNet-W allows us to study multiple shortcuts emerging from training on natural images. We find computer vision models, including large foundation models -- regardless of training set, architecture, and supervision -- struggle when multiple shortcuts are present. Even methods explicitly designed to combat shortcuts struggle in a Whac-A-Mole dilemma. To tackle this challenge, we propose Last Layer Ensemble, a simple-yet-effective method to mitigate multiple shortcuts without Whac-A-Mole behavior. Our results surface multi-shortcut mitigation as an overlooked challenge critical to advancing the reliability of vision systems. The datasets and code are released: https://github.com/facebookresearch/Whac-A-Mole.git.
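A sketch of the Last Layer Ensemble idea appears below: a shared frozen backbone feeds several last-layer classifiers, each intended to be trained against a different shortcut-targeted augmentation, and their logits are averaged at inference. The shapes, head count, and trivial backbone are illustrative assumptions.

import torch
import torch.nn as nn

class LastLayerEnsemble(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int, num_heads: int):
        super().__init__()
        self.backbone = backbone  # shared feature extractor, one forward pass
        self.heads = nn.ModuleList(
            nn.Linear(feat_dim, num_classes) for _ in range(num_heads)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)
        logits = torch.stack([head(feats) for head in self.heads])
        return logits.mean(dim=0)  # ensemble over shortcut-specific heads

# Toy usage with a flattening "backbone" so the sketch runs standalone.
model = LastLayerEnsemble(nn.Flatten(), feat_dim=3 * 32 * 32, num_classes=10, num_heads=3)
print(model(torch.randn(4, 3, 32, 32)).shape)  # torch.Size([4, 10])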
This is a brief technical report on our proposed method for the Multiple-Object Tracking (MOT) Challenge in Complex Environments. In this paper, we treat the MOT task as a two-stage task comprising human detection and trajectory matching. Specifically, we design an improved human detector and associate most of the detections to guarantee the integrity of motion trajectories. We also propose a location-wise matching matrix to obtain more accurate trajectory matching. Without any model merging, our method achieves 66.672 HOTA and 93.971 MOTA on the DanceTrack challenge dataset.
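A location-wise matching matrix can be sketched as an assignment problem over track-detection distances; the cost below is a simple Euclidean form assumed for illustration, not necessarily the exact cost used in the report.

import numpy as np
from scipy.optimize import linear_sum_assignment

track_centers = np.array([[100.0, 200.0], [400.0, 150.0]])              # existing tracks
det_centers = np.array([[102.0, 198.0], [398.0, 155.0], [50.0, 50.0]])  # new detections

# Location-wise cost: pairwise distance between every track and detection.
cost = np.linalg.norm(track_centers[:, None, :] - det_centers[None, :, :], axis=-1)
rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
for t, d in zip(rows, cols):
    print(f"track {t} -> detection {d} (cost {cost[t, d]:.1f})")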
This paper focuses on the prevalent performance imbalance across the stages of incremental learning. To avoid obvious stage learning bottlenecks, we propose a brand-new stage-isolation based incremental learning framework, which leverages a series of stage-isolated classifiers to perform the learning task of each stage without interference from the others. Concretely, to aggregate multiple stage classifiers into a uniform one impartially, we first introduce a temperature-controlled energy metric to indicate the confidence score levels of the stage classifiers. We then propose an anchor-based energy self-normalization strategy to ensure the stage classifiers work at the same energy level. Finally, we design a voting-based inference augmentation strategy for robust inference. The proposed method is rehearsal-free and can work in almost all continual learning scenarios. We evaluate the proposed method on four large benchmarks. Extensive results demonstrate the superiority of the proposed method in establishing a new state-of-the-art in overall performance. \emph{Code is available at} \url{https://github.com/iamwangyabin/ESN}.
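The temperature-controlled energy metric can be written as E(x; T) = -T * logsumexp(logits / T); the sketch below computes it for two hypothetical stage classifiers, while the anchor-based self-normalization loss itself is a detail of the paper and is only summarized in a comment.

import torch

def energy(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # Lower energy corresponds to a more confident stage classifier.
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

stage_a = torch.tensor([[4.0, 0.5, 0.2]])  # a confident stage classifier
stage_b = torch.tensor([[1.1, 1.0, 0.9]])  # an uncertain stage classifier
print(energy(stage_a), energy(stage_b))    # stage_a yields lower energy

# Anchor-based self-normalization would then pull each stage's energies toward
# a shared anchor level, making the scores of stage-isolated classifiers
# directly comparable when they are aggregated.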
Adversarial training is one of the most powerful methods to improve the robustness of pre-trained language models (PLMs). However, this approach is typically more expensive than traditional fine-tuning because of the necessity to generate adversarial examples via gradient descent. Delving into the optimization process of adversarial training, we find that robust connectivity patterns emerge in the early training phase (typically $0.15\sim0.3$ epochs), far before parameters converge. Inspired by this finding, we dig out robust early-bird tickets (i.e., subnetworks) to develop an efficient adversarial training method: (1) searching for robust tickets with structured sparsity in the early stage; (2) fine-tuning robust tickets in the remaining time. To extract the robust tickets as early as possible, we design a ticket convergence metric to automatically terminate the searching process. Experiments show that the proposed efficient adversarial training method can achieve up to $7\times \sim 13 \times$ training speedups while maintaining comparable or even better robustness compared to the most competitive state-of-the-art adversarial training methods.
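One plausible form of the ticket convergence metric, assumed here for illustration, is the Hamming distance between structured pruning masks from consecutive steps: the search stops once the masks stabilize. The magnitude-based mask and the 0.05 threshold are placeholders, not the paper's exact criterion.

import torch

def structured_mask(weight: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    # Keep the output rows (a structured unit) with the largest L1 norms.
    scores = weight.abs().sum(dim=1)
    k = max(1, int(keep_ratio * weight.shape[0]))
    mask = torch.zeros(weight.shape[0], dtype=torch.bool)
    mask[scores.topk(k).indices] = True
    return mask

def mask_distance(m1: torch.Tensor, m2: torch.Tensor) -> float:
    # Fraction of rows whose kept/pruned status changed between steps.
    return (m1 != m2).float().mean().item()

prev = structured_mask(torch.randn(64, 128))
curr = structured_mask(torch.randn(64, 128))
if mask_distance(prev, curr) < 0.05:  # masks stabilized: the ticket has emerged
    print("terminate the search and fine-tune the robust ticket")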
Scene flow represents the 3D motion of each point in a scene, which explicitly describes the distance and direction of each point's movement. Scene flow estimation is used in various applications such as autonomous driving, activity recognition, and virtual reality. Since annotating ground-truth scene flow for real-world data is challenging, no real-world dataset is available that provides a large amount of data with scene flow ground truth. Therefore, many works pre-train their networks on synthetic data and fine-tune on real-world LiDAR data. Unlike previous unsupervised learning of scene flow in point clouds, we propose to use odometry information to assist the unsupervised learning of scene flow and train our network on real-world LiDAR data. Supervised odometry provides a more accurate shared cost volume for scene flow. In addition, the proposed network has mask-weighted warp layers to obtain a more accurately predicted point cloud. The warp operation applies the estimated pose transformation or scene flow to the source point cloud to obtain the predicted point cloud, which is the key to refining scene flow from coarse to fine. When performing the warp operation, points in different states use different weights for the pose transformation and the scene flow transformation. We classify point states as static, dynamic, and occluded, where a static mask is used to separate static from dynamic points and an occlusion mask is used to separate occluded points. Mask-weighted warping means that the static and occlusion masks are used as weights when performing the warp operation. Ablation experiments verify the effectiveness of our designs. Experimental results show the promising prospect of an unsupervised learning method for 3D scene flow on real-world data.
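A minimal sketch of the mask-weighted warp follows: the rigid (pose) warp and the per-point scene-flow warp are blended by a per-point static mask. The shapes and the convex-combination form are assumptions for illustration; in the paper the masks are predicted by the network.

import torch

def mask_weighted_warp(points, flow, R, t, static_mask):
    # points: (N, 3), flow: (N, 3), R: (3, 3), t: (3,), static_mask: (N, 1) in [0, 1].
    rigid = points @ R.T + t   # ego-motion warp, appropriate for static points
    flowed = points + flow     # scene-flow warp, appropriate for dynamic points
    return static_mask * rigid + (1.0 - static_mask) * flowed

points = torch.randn(1024, 3)
flow = 0.1 * torch.randn(1024, 3)
R, t = torch.eye(3), torch.tensor([0.5, 0.0, 0.0])
static_mask = torch.rand(1024, 1)  # learned in the paper, random here
print(mask_weighted_warp(points, flow, R, t, static_mask).shape)  # torch.Size([1024, 3])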